Estimating the average treatment effect (ATE) from observational data is challenging due to selection bias. Existing works mainly tackle this challenge in two ways. Some researchers propose constructing a score function that satisfies the orthogonal condition, which guarantees that the resulting estimator is more robust. Others explore representation learning models that produce balanced representations of the treated and controlled groups. However, existing studies fail to 1) discriminate the controlled units in the representation space to avoid the over-balancing issue; and 2) fully utilize the orthogonality information. In this paper, we propose a moderately-balanced representation learning (MBRL) framework based on recent covariate-balanced representation learning methods and orthogonal machine learning theory. The framework protects the representation from being over-balanced via multi-task learning. Meanwhile, MBRL incorporates the noise orthogonality information into the training and validation stages for better ATE estimation. Comprehensive experiments on benchmark and simulated datasets show the superiority and robustness of our method on treatment effect estimation compared with existing state-of-the-art methods.
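As a rough, hypothetical illustration of the representation-balancing idea this line of work builds on (not the authors' MBRL implementation), the sketch below fits factual outcomes on a learned representation while penalizing the gap between treated and control representation means; all class and function names and the simple mean-gap penalty are assumptions.

```python
import torch
import torch.nn as nn

class BalancedRepNet(nn.Module):
    """Toy representation network with separate outcome heads per treatment arm."""
    def __init__(self, d_in, d_rep=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                     nn.Linear(64, d_rep), nn.ReLU())
        self.head_t = nn.Linear(d_rep, 1)   # outcome head for treated units
        self.head_c = nn.Linear(d_rep, 1)   # outcome head for control units

    def forward(self, x, t):
        phi = self.encoder(x)
        y_hat = torch.where(t.bool().unsqueeze(-1),
                            self.head_t(phi), self.head_c(phi))
        return phi, y_hat.squeeze(-1)

def balancing_loss(model, x, t, y, alpha=1.0):
    phi, y_hat = model(x, t)
    factual = nn.functional.mse_loss(y_hat, y)
    # Crude balance penalty: squared distance between treated and control
    # representation means. MBRL's "moderate" balancing uses multi-task
    # learning to keep this from over-balancing; that part is omitted here.
    gap = phi[t.bool()].mean(dim=0) - phi[~t.bool()].mean(dim=0)
    return factual + alpha * gap.pow(2).sum()
```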
Many practical decision-making problems in economics and healthcare seek to estimate the average treatment effect (ATE) from observational data. Double/Debiased Machine Learning (DML) is one of the prevalent methods for estimating ATE in observational studies. However, DML estimators can suffer from an error-compounding issue and even give extreme estimates when the propensity scores are misspecified or very close to 0 or 1, and the existing literature has not resolved this issue from a theoretical standpoint. In this paper, we propose a Robust Causal Learning (RCL) method to offset the deficiencies of DML estimators. Theoretically, the RCL estimators i) are as consistent and doubly robust as the DML estimators, and ii) get rid of the error-compounding issue. Empirically, comprehensive experiments show that i) the RCL estimators give more stable estimates of the causal parameters than the DML estimators, and ii) the RCL estimators outperform traditional estimators and their variants when applying different machine learning models on both simulation and benchmark datasets.
Recommender systems (RS) are important online applications that affect billions of users every day. The mainstream RS ranking framework consists of two parts: a Multi-Task Learning (MTL) model that predicts various types of user feedback, i.e., clicks, likes, and shares, and a Multi-Task Fusion (MTF) model that combines the multi-task outputs into one final ranking score with respect to user satisfaction. Little research has been devoted to the fusion model, even though it has a great impact on the final recommendation as the last crucial step of ranking. To optimize long-term user satisfaction rather than greedily obtain instant rewards, we formulate the MTF task as a Markov Decision Process (MDP) within a recommendation session and propose a Batch Reinforcement Learning (RL) based Multi-Task Fusion framework (BatchRL-MTF), which consists of a batch RL framework and online exploration. The former exploits batch RL to learn an optimal recommendation policy offline from fixed batch data for long-term user satisfaction, while the latter explores potentially high-value actions online to break through the local-optimum dilemma. Based on a comprehensive investigation of user behaviors, we model user satisfaction rewards with subtle heuristics from two aspects: user stickiness and user activeness. Finally, we conduct extensive experiments on a billion-sample-level real-world dataset to show the effectiveness of our model. We propose a conservative offline policy estimator to test our model offline. Furthermore, we conduct online experiments in a real recommendation environment to compare the performance of different models. As one of the few batch RL studies successfully applied to the MTF task, our model has also been deployed on a large-scale industrial short-video platform, serving hundreds of millions of users.
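A minimal sketch of the fusion step the abstract describes, assuming the fusion action is simply a weight vector over the per-task predictions; the names and the linear combination rule are illustrative assumptions, not the deployed BatchRL-MTF model, which learns the action with offline RL rather than fixing it by hand.

```python
import numpy as np

# Multi-task fusion (MTF) toy example: per-item predictions from an MTL model
# (e.g., pClick, pLike, pShare) are combined into one ranking score by a
# weight vector. BatchRL-MTF treats choosing this weight vector as an action
# in a session-level MDP and learns it with batch RL; here we only show the
# fusion itself with one fixed, hypothetical action.

def fuse_scores(task_scores: np.ndarray, action_weights: np.ndarray) -> np.ndarray:
    """task_scores: (n_items, n_tasks); action_weights: (n_tasks,)."""
    return task_scores @ action_weights          # final ranking score per item

rng = np.random.default_rng(0)
scores = rng.uniform(size=(5, 3))                # pClick, pLike, pShare for 5 items
action = np.array([0.6, 0.3, 0.1])               # one candidate fusion action
ranking = np.argsort(-fuse_scores(scores, action))
print(ranking)                                   # items ordered by fused score
```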
Causal learning is the key to obtaining stable predictions and answering \textit{what if} questions in decision-making. In causal learning, it is central to seek methods to estimate the average treatment effect (ATE) from observational data. Double/Debiased Machine Learning (DML) is one of the prevalent methods for estimating ATE. However, DML estimators can suffer from an \textit{error-compounding issue} and even give extreme estimates when the propensity scores are close to 0 or 1. Previous studies have worked around this issue through empirical tricks such as propensity score trimming, yet none of the existing works solves it from a theoretical standpoint. In this paper, we propose a \textit{Robust Causal Learning (RCL)} method to offset the deficiencies of DML estimators. Theoretically, the RCL estimators i) satisfy the (higher-order) orthogonal condition and are as \textit{consistent and doubly robust} as the DML estimators, and ii) get rid of the error-compounding issue. Empirically, comprehensive experiments show that: i) the RCL estimators give more stable estimations of the causal parameters than DML; ii) the RCL estimators outperform traditional estimators and their variants when applying different machine learning models on both simulation and benchmark datasets, as well as a mimic consumer credit dataset generated by WGAN.
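To make the error-compounding concern concrete, here is a minimal doubly robust (AIPW-style) ATE estimator in the spirit of DML: the inverse-propensity terms blow up as the estimated propensity approaches 0 or 1, which is what propensity trimming patches empirically and what RCL targets theoretically. The function name and the optional clipping argument are illustrative assumptions, not part of RCL.

```python
import numpy as np

def aipw_ate(y, t, mu1, mu0, e, clip=None):
    """Doubly robust (AIPW) ATE estimate from plug-in nuisance estimates.

    y, t     : outcomes and binary treatments, shape (n,)
    mu1, mu0 : estimated outcome regressions E[Y|X, T=1] and E[Y|X, T=0]
    e        : estimated propensity scores P(T=1|X)
    clip     : optional trimming bound, e.g. 0.01; without it, e near 0 or 1
               makes the inverse-propensity correction terms explode
    """
    if clip is not None:
        e = np.clip(e, clip, 1.0 - clip)
    psi = (mu1 - mu0
           + t * (y - mu1) / e
           - (1 - t) * (y - mu0) / (1 - e))
    return psi.mean()
```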
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs carry a large number of parameters, which makes them computationally expensive. It is therefore difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
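As a loose, assumed illustration of the general setting (not RELIANT's actual debiasing mechanism), the snippet below combines a standard soft-label distillation loss with a crude demographic-parity penalty on a binary student prediction; all names and the choice of fairness measure are hypothetical.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sensitive, tau=2.0, lam=0.5):
    """KD loss plus a toy group-fairness penalty for a binary node task.

    student_logits, teacher_logits : (n_nodes, n_classes)
    sensitive                      : (n_nodes,) binary sensitive attribute
    """
    # standard soft-label distillation with temperature tau
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau ** 2

    # crude demographic-parity gap of the student's positive-class probability
    probs = F.softmax(student_logits, dim=-1)[:, 1]
    gap = (probs[sensitive == 0].mean() - probs[sensitive == 1].mean()).abs()

    return kd + lam * gap
```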
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination of the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure under imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
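For concreteness, the sketch below constructs a simplex equiangular tight frame (the neural-collapse geometry the abstract refers to: unit-norm class directions with pairwise cosine similarity -1/(K-1)) and a simple cosine-alignment regularizer on feature centers. This is only an assumed form of such a regularizer, not the paper's exact loss.

```python
import torch

def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """Return a (num_classes, dim) simplex ETF: unit-norm rows that are
    maximally and equally separated (pairwise cosine = -1/(K-1))."""
    assert dim >= num_classes
    u, _ = torch.linalg.qr(torch.randn(dim, num_classes))   # orthonormal columns
    center = (torch.eye(num_classes)
              - torch.ones(num_classes, num_classes) / num_classes)
    etf = (num_classes / (num_classes - 1)) ** 0.5 * (u @ center)
    return etf.t()                                           # (K, dim)

def center_regularizer(class_centers: torch.Tensor, etf: torch.Tensor) -> torch.Tensor:
    """Pull normalized per-class feature centers toward the ETF directions."""
    centers = torch.nn.functional.normalize(class_centers, dim=-1)
    return (1.0 - (centers * etf).sum(-1)).mean()            # 1 - cosine similarity
```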
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
The surrogate loss of variational autoencoders (VAEs) poses various challenges to their training, inducing an imbalance between task fitting and representation inference. To avert this, existing strategies for VAEs focus on adjusting the tradeoff by introducing hyperparameters, deriving a tighter bound under mild assumptions, or decomposing the loss components for certain neural settings; VAEs still suffer from uncertain tradeoff learning. We propose a novel evolutionary variational autoencoder (eVAE), building on the variational information bottleneck (VIB) theory and integrative evolutionary neural learning. eVAE integrates a variational genetic algorithm into the VAE with variational evolutionary operators, including variational mutation, crossover, and evolution. Its inner-outer-joint training mechanism synergistically and dynamically generates and updates the uncertain tradeoff learning in the evidence lower bound (ELBO) without additional constraints. Apart from learning a lossy compression and representation of data under the VIB assumption, eVAE presents an evolutionary paradigm for tuning critical factors of VAEs and deep neural networks, and addresses the premature convergence and random search problems by integrating evolutionary optimization into deep learning. Experiments show that eVAE addresses the KL-vanishing problem in text generation with low reconstruction loss, generates all disentangled factors with sharp images, and improves image generation quality. eVAE achieves better reconstruction loss, disentanglement, and generation-inference balance than its competitors.
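To ground the tradeoff the abstract keeps referring to, here is the standard per-batch VAE objective with the reconstruction and KL terms made explicit; eVAE evolves this balance with variational operators rather than fixing a weight by hand. The Gaussian parameterization, the beta knob, and the toy mutation below are assumptions for illustration, not eVAE itself.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """Negative ELBO per sample: reconstruction + beta * KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl   # the fitting/inference tradeoff eVAE tunes evolutionarily

def mutate_tradeoffs(betas, sigma=0.1):
    """Toy 'mutation' over a population of candidate tradeoff weights."""
    noise = torch.randn(len(betas)) * sigma
    return [max(0.0, b + n.item()) for b, n in zip(betas, noise)]
```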
Surgical robot automation has attracted increasing research interest over the past decade, given its huge potential to benefit surgeons, nurses and patients. Recently, the learning paradigm of embodied AI has demonstrated promising ability to learn good control policies for various complex tasks, where embodied AI simulators play an essential role in facilitating relevant research. However, existing open-source simulators for surgical robots still do not sufficiently support human interaction through physical input devices, which further limits effective investigation of how human demonstrations would affect policy learning. In this paper, we study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning. Specifically, we establish our platform based on our previously released SurRoL simulator, with several new features co-developed to allow high-quality human interaction via an input device. With these, we further propose to collect human demonstrations and imitate the action patterns to achieve more effective policy learning. We showcase the improvement of our simulation environment with the designed new features and tasks, and validate state-of-the-art reinforcement learning algorithms using the interactive environment. Promising results are obtained, with which we hope to pave the way for future research on surgical embodied intelligence. Our platform is released and will be continuously updated at: https://med-air.github.io/SurRoL/
Brain midline shift (MLS) is one of the most critical factors to be considered in clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods for MLS quantification not only require intensive labeling for millimeter-level measurement but also suffer from poor performance due to their dependence on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate the MLS measurement task as a deformation estimation problem and solve it using a few MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a large number of unlabeled MLS data and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how the image differs from a non-MLS image, and the regularization plays an important role in the sparse-to-dense refinement of the deformation field. Our experiments on a real clinical brain hemorrhage dataset achieve state-of-the-art performance, and our method generates interpretable deformation fields.
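As an assumed sketch of the sparse-to-dense idea (not the paper's implementation, which additionally uses diffusion-model representations of non-MLS scans), the loss below supervises the predicted deformation field only where sparse MLS labels exist and regularizes the rest of the field to be smooth; the names and the gradient-smoothness term are illustrative.

```python
import torch

def sparse_deformation_loss(pred_field, sparse_target, label_mask, lam=0.1):
    """pred_field, sparse_target: (B, 2, H, W); label_mask: (B, 1, H, W) in {0, 1}.

    Supervised term only where sparse labels exist; smoothness term everywhere.
    """
    supervised = (((pred_field - sparse_target) ** 2) * label_mask).sum() \
                 / label_mask.sum().clamp(min=1)
    dx = (pred_field[..., :, 1:] - pred_field[..., :, :-1]).pow(2).mean()
    dy = (pred_field[..., 1:, :] - pred_field[..., :-1, :]).pow(2).mean()
    return supervised + lam * (dx + dy)
```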